Others have addressed most of the issues but I’d like to address another issue.
Claims that have explanatory reasons are better than claims that don’t.
This isn’t at all relevant. In the Soviet-Poland experiment the two groups were asked to give a probability estimate. Whether one claim is better in some sense is a claim distinct from whether its probability is higher. Even if one prefers claims with explanatory power to those without it, this doesn’t make the former more probable. Indeed, the fact that such claims are easier to falsify in Popper’s sense can be seen as connected to the fact that, in a Bayesian sense, they are more improbable.
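To spell out that last connection (my gloss, not the original commenter’s): adding explanatory detail to a claim turns it into a conjunction, and the conjunction rule caps its probability:

$$P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)$$

The extra content that makes a claim easier to falsify in Popper’s sense is exactly what makes it no more, and typically less, probable in the Bayesian sense.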
But you can’t argue that, because this class of hypotheses is preferred, people should be willing to assign a higher probability to them.
“Probability estimate” is a technical term which most people don’t know. That isn’t a bias or a fault. When asked to give one, especially via a phrase like “which is more probable”, which is ambiguous (it is often used in a non-mathematical sense, whereas the term “probability estimate” isn’t), they guess what you want and try to give that instead. Make sense so far?
But this is nullified in the colour die experiment of Tversky and Kahneman, as detailed in the Conjunction Fallacy article, as linked by HonoreDB in a comment in this very topic, as so far unaddressed by you.
Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequences of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you chose appears on successive rolls of the die. Please check the sequence of greens and reds on which you prefer to bet.
1. RGRRR
2. GRGRRR
3. GRRRRR
65% still chose sequence 2 despite it being a conjunction of sequence 1 with another event: sequence 2 is just sequence 1 with a G prepended, so it is strictly less probable. There was actual money on the line if the students chose a winning sequence. There is no plausible way that the students could have misinterpreted this question because of ambiguous understandings of phrases involving “probability”.
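To make the arithmetic concrete, here is a minimal sketch (mine, not from the paper; all names are illustrative) that computes the exact probability of each sequence matching a fixed window of rolls, plus a Monte Carlo estimate of the actual bet: the chance that the sequence appears somewhere in 20 rolls.

```python
import random

# Die with four green faces and two red faces: P(G) = 2/3, P(R) = 1/3.
FACE_PROB = {"G": 2 / 3, "R": 1 / 3}
SEQUENCES = ["RGRRR", "GRGRRR", "GRRRRR"]

def window_probability(seq):
    """Exact probability that a fixed window of len(seq) rolls equals seq."""
    prob = 1.0
    for face in seq:
        prob *= FACE_PROB[face]
    return prob

def appearance_probability(seq, n_rolls=20, n_trials=100_000):
    """Monte Carlo estimate that seq appears somewhere in n_rolls rolls."""
    hits = 0
    for _ in range(n_trials):
        # "GGGGRR" has four G faces and two R faces, matching the die.
        rolls = "".join(random.choice("GGGGRR") for _ in range(n_rolls))
        if seq in rolls:
            hits += 1
    return hits / n_trials

for seq in SEQUENCES:
    print(f"{seq}: per-window p = {window_probability(seq):.5f}, "
          f"appears in 20 rolls ~ {appearance_probability(seq):.3f}")
```

Because GRGRRR contains RGRRR as a substring, every run of 20 rolls that pays out for sequence 2 also pays out for sequence 1, so betting on sequence 2 can never be the better choice.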
I find your claim that people interpret “which is more probable” as asking something other than which is more probable to be dubious at best; in any case, it completely fails to explain the above study result.
You’re calling lack of mathematical knowledge a bias? Before it was about how people think, now it’s just ignorance of a concrete skill...?

I’m not calling it anything, and I didn’t use the word “bias” once. Perhaps people who have studied probability theory would have fared better on the task. But this is kind of the point, isn’t it? The issue is that when a person intuitively tries to make a judgement on probabilities, their intuition gives them the wrong result, because it seems to use a heuristic based on representativeness rather than actual probability.
Nobody is denying that this heuristic is most likely more than adequate for the majority of normal day-to-day life. You seem to be conflating the claim that people use some kind of representativeness heuristic, rather than correct probability analysis, in probability estimation problems—which appears to be true, see above—with some kind of strawman claim you have invented that people are stupid idiots whose probability estimation faculties are so bad they reliably get every single probability estimation problem wrong every time, lolmorons. People here are posting in defense of the first; you keep attacking as if they were defending the second.
The hypothesis
a representativeness heuristic was easier for evolution to implement than a full, correct model of probability theory, and it did the job well enough to be a worthwhile substitute, thus humans will incorrectly choose representative cases rather than more probable cases when attempting to choose more probable cases
predicts the results of all the studies strongly. Your contrary hypothesis, which to me appears to be along the lines of
humans DO implement correct probability (i.e. they do not implement it incorrectly) but psychologists are determined to make humans look stupid in order to laugh at them, thus they craft experiments that exploit ambiguity over “probability” to make it look like humans use a representativeness heuristic when they in fact do not
predicts the results of some of the studies, but not the red/green die rolling one.
So now we are waiting for your post hoc justification of how test subjects who knew they were seeking a strictly more probable case (because they were hoping to win money) still got the wrong answer.
The issue is that when a person intuitively tries to make a judgement on probabilities
Not intuitively. They’re in a situation outside their daily routine that they don’t have good intuition about. They are making a judgment not because they think they know what they are doing but because they were asked to.
Can you see how artificial situations are not representative of people’s ability to cope with life? How putting people in artificial situations designed to fool them is itself a biased technique which lends itself to reaching a particular conclusion?
How putting people in artificial situations designed to fool them is itself a biased technique which lends itself to reaching a particular conclusion?
(emphasis mine)
I understand this criticism when applied to the Linda experiment, but not when it is applied to the color experiment. There was no “trickery” here.
To avoid more post-hockery, please propose a concrete experiment testing the conjunction fallacy, but controlling for “tricks” inherent in the design.
Not all the experiments are the same thing, just because they reached the same conclusion.
The color experiment is different: it’s about asking people to do something they don’t know how to do, and then somehow interpreting that as a bias.
The paper doesn’t describe the test conditions. (That makes it unscholarly: scientific papers are supposed to have things like “possible sources of error” sections and to describe their experimental procedure carefully. It doesn’t even say whether the experiment was double-blind. Presumably not; if not, an explanation of why it doesn’t need to be is required but not provided.) There are various issues there, e.g. pressure not to ask questions could easily have been present.
Setting that aside, it’s putting people in an artificial situation that they don’t understand well and getting them to make a guess at the answer. This doesn’t simulate real life well. It has a consistent bias: people are better at real life than at artificial situations designed so that they will fail.
EDIT: Look at this sentence in the 1983 paper:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
See. Biased. Specifically designed to fool people. They directly admit it. You never read the paper, right? You just heard a summary of the conclusions and trusted that they had used the methods of science, which they had not.
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
See. Biased. Specifically designed to fool people.
What that says is that the test in question addresses only one of two obvious questions about the conjunction fallacy. It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice. It’s like if they were trying to find out if overdosing on a certain vitamin can be fatal: They’ll give their (animal) subjects some absurd amount of it, and see what happens, and if that turns out to be fatal, then they go trying different amounts to see where the effect starts. (That way they know to look at, say, kidney function, if that’s what killed the first batch of subjects, rather than having to test all the different body systems to find out what’s going on.) All the first test tells you is whether that kind of overdose is possible at all.
Just because it doesn’t answer every question you’d like it to doesn’t mean that it doesn’t answer any questions.
Do you understand that the situation of “someone using his intelligence to try to fool you” and the situation “living life” are different? Studies about the former do not give results about the latter. The only valid conclusion from this study is “people can sometimes be fooled, on purpose”. But that isn’t the conclusion it claims to support. Being tricked by people intentionally is different than being inherently biased.
The point is “people can be fooled into making this specific mistake”, which is an indication that that specific mistake is one that people do make in some circumstances, rather than that specific mistake not being made at all. (As a counterexample, I imagine that it would be rather hard to trick someone into claiming that if you put two paperclips in a bowl, and then added two more paperclips to that bowl, and then counted the paperclips, you’d find three—that’s a mistake that people don’t make, even if someone tries to trick them.)
“Some circumstances” might only be “when someone is trying to trick them”, but even if that’s true (which the experiment with the dice suggests it’s not) that’s not as far removed from “living life” as you’re trying to claim—people do try to trick each other in real life, and it’s not too unusual to encounter other situations that are just as tricky as dealing with a malicious human.
The color experiment is different: it’s about asking people to do something they don’t know how to do, and then somehow interpreting that as a bias.
Setting that aside, it’s putting people in an artificial situation that they don’t understand well and getting them to make a guess at the answer. This doesn’t simulate real life well.
This is exhausting. Whatever this “real life” of yours is, it’s so boring, predictable, and uniform as to be not worth living. Who hasn’t had to adapt to new (yes, even “artificial”) situations before? Who hasn’t played a game? Who hasn’t bet on the outcome of an event?
Propose an experiment already. My patience is waning.

EDIT: I have, in fact, read the 1983 paper.
You think I have to propose a better experiment? We should just believe the best experiment anyone did, even if it’s no good?

Yes. If there’s no way to test your claim that systemic pseudoscientific practice brought about the conjunction fallacy, how can we know?

I have something like a thirty percent chance of correctly determining whether or not you will judge a given experimental design to be biased. Therefore I surely can’t do it.

No. Of course not.

Ceterum censeo: propose an experiment already.

I don’t know of any experiment that would be any good. I think the experiments of this kind should stop, pending a new idea about how to do fundamentally better ones.

Then we’re in invisible dragon territory. I can safely ignore it.
They’re in a situation outside their daily routine that they don’t have good intuition about. They are making a judgment not because they think they know what they are doing but because they were asked to.
This is not exactly an uncommon situation in day-to-day life.
Your objection seems to boil down to, “Experimental settings are not identical to actual settings, therefore, everything derived from them is useless.”
It seems pretty unlikely that people will miscalculate probability in an experiment with money on the line, but then always calculate it with perfect accuracy when they are not specifically in an experimental setting.
In short how is the experimental setting so different that we should completely ignore experimental results? If you have a detailed argument for that, then you’d actually be making a point.
I think, in general, they don’t calculate it at all.
They don’t do it in normal life, and they don’t do it in most of the studies either.
In short how is the experimental setting so different that we should completely ignore experimental results?
It’s different in that it’s specifically designed to elicit this mistake. It’s designed to trick people. How do I know this? Well apart from the various arguments I’ve given, they said so in their paper.
So, your position is that it is completely unrealistic to try to trick people, because that never happens in real life? Really? Or is it a moral condemnation of the researchers? They’re bad people, so we shouldn’t acknowledge their results?
People try to trick each other all the time. People try to trick themselves all the time. People routinely make informal estimates of what’s likely to happen. These experiments all show that under certain circumstances, people will systematically be wrong. Not one thing you’ve said goes to counter this. Your entire argument appears predicated on the confusion of “all” with “some.”
If you succeed at tricking people, you can get them to make mistakes.
What those mistakes are is an interesting issue. There is no argument the mistakes actually have anything to do with the conjunction fallacy. They were simply designed to look like they do.
If you succeed at tricking people, you can get them to make mistakes.
This pretty much is the conjunction fallacy. Facts presented in certain ways will cause people to systematically make mistakes. If people did not have a bias in this respect, these tricks would not work. It is going to be hard to get people to think that “Billy won the science fair and is captain of the football team” is more likely than each statement separately, because the representativeness heuristic is not implicated.
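To put hypothetical numbers on the Billy example (mine, purely illustrative): even if each conjunct is individually plausible, say $P(\text{science fair}) = 0.2$ and $P(\text{captain} \mid \text{science fair}) = 0.5$, then $P(\text{both}) = 0.2 \times 0.5 = 0.1$, which can never exceed either conjunct alone. With no “representative” story tying the two claims together, intuition ranks them correctly without help.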
There is no argument the mistakes actually have anything to do with the conjunction fallacy.
I have no idea what this means, unless it’s saying, “I’m right and that’s all there is to say.” That is hardly a useful claim. Most of the commenters on this forum, and the vast majority of psychologists, appear to hold not only that there is such an argument, but that it is compelling.
It is unclear how this relates to grandparent’s point.

No. Absolutely not. First of all, the Soviet experiment was done (and has been replicated in other versions) with educated experts. Second, it isn’t very technical at all to ask for a probability; people learn about probabilities in grade school. How do you think you can begin to talk to the Bayesians when you think “probability estimate” is a technical term? They use far more difficult math than that as part of their basic apparatus. Incidentally, this hypothesis is also ruled out by the follow-up studies already linked in this thread, which show that the conjunction fallacy shows up when people are trying to bet.
I have to wonder if your fanatical approach to Popper is causing problems here: you think, or act as if you think, that any criticism, no matter how weak or ill-informed, permits rejecting a claim until someone responds to that specific criticism. This is not a healthy attitude if one is interested in an exchange of ideas.